Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons
Martel, E., Lazcano, R., Lopez, J., Madroñal, D., Salvador, R., Lopez, S., Juarez, E., Guerra, R., Sanz, C., Sarmiento, R.
Dimensionality reduction is a critical preprocessing step for increasing the efficiency and performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms, such as Principal Component Analysis (PCA), are computationally demanding, which makes their implementation on high-performance computing architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms and thereby reducing the time required to process a given hyperspectral image. Moreover, the results obtained with different hyperspectral images are compared with those obtained with a recently published field-programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
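For readers unfamiliar with the technique, the following is a minimal sketch of PCA-based spectral dimensionality reduction on a hyperspectral cube. It illustrates the general algorithm the abstract refers to, not the authors' GPU, manycore, or FPGA implementations; the cube shape and function name are assumptions for illustration.

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Reduce the spectral dimension of an (H, W, B) hyperspectral cube.

    Illustrative sketch of standard PCA, not the paper's optimized code.
    """
    h, w, b = cube.shape
    x = cube.reshape(-1, b).astype(np.float64)  # one row per pixel spectrum
    x -= x.mean(axis=0)                         # center each spectral band
    cov = x.T @ x / (x.shape[0] - 1)            # B x B covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]    # leading principal components
    return (x @ top).reshape(h, w, n_components)

# Hypothetical 64x64 image with 200 spectral bands, reduced to 3 components.
reduced = pca_reduce(np.random.rand(64, 64, 200), 3)
print(reduced.shape)  # (64, 64, 3)
```

The covariance computation and eigendecomposition dominate the cost as the band count grows, which is precisely the part that benefits from the parallel platforms the paper compares.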
Evaluating ChatGPT text-mining of clinical records for obesity monitoring
Fins, Ivo S., Davies, Heather, Farrell, Sean, Torres, Jose R., Pinchbeck, Gina, Radford, Alan D., Noble, Peter-John
Background: Veterinary clinical narratives remain a largely untapped resource for addressing complex diseases. Here we compare the ability of a large language model (ChatGPT) and a previously developed regular expression (RegexT) to identify overweight body condition scores (BCS) in veterinary narratives. Methods: BCS values were extracted from 4,415 anonymised clinical narratives either with RegexT or by appending the narrative to a prompt sent to ChatGPT, coercing the model to return the BCS information. Data were manually reviewed for comparison. Results: The precision of RegexT was higher (100%, 95% CI 94.81-100%) than that of ChatGPT (89.3%, 95% CI 82.75-93.64%). However, the recall of ChatGPT (100%, 95% CI 96.18-100%) was considerably higher than that of RegexT (72.6%, 95% CI 63.92-79.94%). Limitations: Subtle prompt engineering is needed to improve ChatGPT output. Conclusions: Large language models create diverse opportunities and, whilst complex, present an intuitive interface to information, but require careful implementation to avoid unpredictable errors.
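The paper's RegexT expression is not reproduced in the abstract, so the following is only a hedged sketch of what regex-based BCS extraction and the reported precision/recall metrics look like; the pattern and the counts are illustrative assumptions, not the authors' actual expression or data.

```python
import re

# Hypothetical pattern for scores like "BCS 8/9"; NOT the paper's RegexT.
BCS_PATTERN = re.compile(r"\bBCS\s*[:=]?\s*([1-9])\s*/\s*9\b", re.IGNORECASE)

def extract_bcs(narrative):
    """Return the first BCS found in a clinical narrative, or None."""
    m = BCS_PATTERN.search(narrative)
    return int(m.group(1)) if m else None

def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    return tp / (tp + fp), tp / (tp + fn)

print(extract_bcs("Obese cat, BCS 8/9, diet advised"))  # 8
print(extract_bcs("Overweight, owner declined weigh-in"))  # None

# Made-up counts: a strict pattern misses free-text mentions (false
# negatives lower recall), mirroring the trade-off the paper reports.
p, r = precision_recall(tp=80, fp=0, fn=20)
print(p, r)  # 1.0 0.8
```

The second narrative shows the core limitation the paper quantifies: a rigid pattern yields perfect precision on what it matches but misses scores expressed in ways the pattern does not anticipate.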
Automation is not grounds for firing employees, judge says
In a landmark case, a labor court in Las Palmas, Spain, has ruled that replacing a worker with an automated system is not valid grounds for terminating that worker's employment. The ruling, the first of its kind in Spain, could affect many sectors, such as accounting, transport, logistics, and manufacturing. It establishes that positions currently held by human workers cannot be eliminated in favor of machines or software automation. Spain has some of the most restrictive labor regulations in Europe regarding the termination of employment. Several organizations, including the OECD, argue that this lack of flexibility in hiring and firing is one of the causes of the country's high unemployment rate.